
Comparison of Multi-agent and Single-agent Inverse Learning on a Simulated Soccer Example



Abstract

We compare the performance of Inverse Reinforcement Learning (IRL) with the relatively new model of Multi-agent Inverse Reinforcement Learning (MIRL). Before comparing the methods, we extend a published Bayesian IRL approach that is only applicable to the case where the reward is state dependent to a general one capable of tackling the case where the reward depends on both state and action. Comparison between IRL and MIRL is made in the context of an abstract soccer game, using both a game model in which the reward depends only on state and one in which it depends on both state and action. Results suggest that the IRL approach performs much worse than the MIRL approach. We speculate that the underperformance of IRL is because it fails to capture equilibrium information in the manner possible in MIRL.
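The distinction the abstract draws, between a reward that depends only on the state, R(s), and one that depends on both state and action, R(s, a), can be made concrete with a small sketch. The toy MDP, the random rewards, and all variable names below are illustrative assumptions, not the paper's model or code; the point is only that R(s) is the special case of R(s, a) in which every action column is identical, which is why the state-action formulation is the more general one.

```python
# A minimal sketch (not the paper's code) contrasting a state-only reward
# R(s) with the more general state-action reward R(s, a). The toy MDP
# below is a made-up example for illustration.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)

# Transition model P[s, a, s'] for a toy MDP: each (s, a) row is a
# probability distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# State-only reward: one value per state.
R_s = rng.normal(size=n_states)
# State-action reward: one value per (state, action) pair; making all
# action columns equal recovers the state-only special case.
R_sa = rng.normal(size=(n_states, n_actions))

def value_iteration(reward_sa, tol=1e-8):
    """Standard value iteration; reward_sa has shape (S, A)."""
    V = np.zeros(n_states)
    while True:
        Q = reward_sa + gamma * P @ V      # Q[s, a] = R(s,a) + gamma * E[V(s')]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Embed R(s) by broadcasting it across actions, then solve both cases.
V1, pi1 = value_iteration(np.repeat(R_s[:, None], n_actions, axis=1))
V2, pi2 = value_iteration(R_sa)
print("greedy policy under R(s):   ", pi1)
print("greedy policy under R(s,a): ", pi2)
```

Under R(s) the reward cannot distinguish between actions taken in the same state, so preferences among actions arise only through the transition dynamics; under R(s, a) the inverse learner must additionally infer per-action reward values, which is the harder case the extended Bayesian IRL approach is meant to handle.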


